Results 1-20 of 68
1.
Transl Vis Sci Technol ; 13(4): 1, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564203

ABSTRACT

Purpose: The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods: Optomap UWF images from the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image dataset was split into a training set and an independent test set at an 80%/20% ratio. Image preprocessing methods were applied, and an EfficientNet classification model was trained on the training set and evaluated on the test set. Results: A total of 2489 UWF images were included in the dataset, yielding a training set of 2008 UWF images and a test set of 481 images. On the test set, the classification models achieved an area under the receiver operating characteristic curve (AUC) of 0.975 for lesion detection, 0.972 for retinal detachment, and 0.913 for retinal breaks. Conclusions: A deep learning system for detecting retinal breaks and retinal detachment on UWF images is feasible and has good specificity. This is relevant for clinical routine, where the rate of missed breaks can be high. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance: This study demonstrates the relevance of applying AI to diagnose peripheral retinal breaks on UWF fundus images in clinical routine.


Subjects
Deep Learning, Retinal Detachment, Retinal Perforations, Humans, Retinal Detachment/diagnosis, Artificial Intelligence, Photography
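
A minimal sketch of the pipeline described above, not the study's code: an 80%/20% split, a short EfficientNet training loop, and AUC evaluation. The images, labels, and hyperparameters are placeholders (random tensors and a binary "lesion present" label), and the timm and scikit-learn packages are assumed to be installed.

```python
import torch
import timm
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data: 100 "images" with a binary label (lesion present / absent).
images = torch.randn(100, 3, 224, 224)
labels = torch.tensor([i % 2 for i in range(100)], dtype=torch.float32)

train_idx, test_idx = train_test_split(
    range(len(images)), test_size=0.2, random_state=0, stratify=labels.numpy())

model = timm.create_model("efficientnet_b0", pretrained=False, num_classes=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()

model.train()
for i in range(0, len(train_idx), 8):        # one pass over the training set, batches of 8
    batch = train_idx[i:i + 8]
    logits = model(images[batch]).squeeze(1)
    loss = criterion(logits, labels[batch])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    scores = torch.sigmoid(model(images[test_idx]).squeeze(1))
print("AUC:", roc_auc_score(labels[test_idx].numpy(), scores.numpy()))
```
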
2.
Sci Data ; 11(1): 373, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609405

ABSTRACT

In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progress of deep-learning-powered surgical technologies depends heavily on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition are pivotal pillars of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset, which addresses diverse requirements for building computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of the annotations by benchmarking several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance on cataract surgery videos. The dataset and annotations are publicly available on Synapse.


Subjects
Cataract Extraction, Cataract, Deep Learning, Video Recording, Humans, Benchmarking, Neural Networks (Computer), Cataract Extraction/methods
3.
IEEE Trans Med Imaging ; PP, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38640052

ABSTRACT

In Ultrasound Localization Microscopy (ULM), achieving high-resolution images relies on the precise localization of contrast agent particles across a series of beamformed frames. However, our study uncovers untapped potential: delay-and-sum beamforming leads to an irreversible reduction of the Radio-Frequency (RF) channel data, and its implications for localization remain largely unexplored. The rich contextual information embedded in RF wavefronts, including their hyperbolic shape and phase, offers great promise for guiding Deep Neural Networks (DNNs) in challenging localization scenarios. To fully exploit this data, we propose to localize scatterers directly in RF channel data. Our approach involves a custom super-resolution DNN using learned feature channel shuffling, non-maximum suppression, and a semi-global convolutional block for reliable and accurate wavefront localization. Additionally, we introduce a geometric point transformation that facilitates seamless mapping to the B-mode coordinate space. To understand the impact of beamforming on ULM, we validate the effectiveness of our method through an extensive comparison with State-Of-The-Art (SOTA) techniques. We present the first in vivo results from a wavefront-localizing DNN, highlighting its real-world practicality. Our findings show that RF-ULM bridges the domain shift between synthetic and real datasets, offering a considerable advantage in terms of precision and complexity. To enable the broader research community to benefit from our findings, our code and the associated SOTA methods are made available at https://github.com/hahnec/rf-ulm.
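
The non-maximum suppression step mentioned above can be illustrated in a few lines. This is a generic sketch, not the released RF-ULM implementation (see the linked repository for that): local maxima of a predicted confidence map above a threshold are kept as peak coordinates.

```python
import torch
import torch.nn.functional as F

def nms_peaks(heatmap: torch.Tensor, threshold: float = 0.5, window: int = 5):
    """heatmap: (H, W) confidence map predicted by a localization network."""
    pooled = F.max_pool2d(heatmap[None, None], kernel_size=window,
                          stride=1, padding=window // 2)[0, 0]
    keep = (heatmap == pooled) & (heatmap > threshold)   # local maxima above threshold
    ys, xs = torch.nonzero(keep, as_tuple=True)
    return torch.stack([ys, xs], dim=1)                  # (N, 2) peak coordinates

heatmap = torch.zeros(64, 128)
heatmap[10, 40] = 0.9
heatmap[30, 90] = 0.8
print(nms_peaks(heatmap))                                # tensor([[10, 40], [30, 90]])
```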

4.
Ophthalmologica ; 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38555632

ABSTRACT

INTRODUCTION: The aim of this study was to investigate the role of an artificial intelligence (AI)-based OCT program in predicting the clinical course of central serous chorioretinopathy (CSC) from baseline pigment epithelium detachment (PED) features. METHODS: Single-center, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were recruited, and OCTs were analyzed by an AI-based platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland) providing automatic detection and volumetric quantification of PEDs. The presence of flat irregular PEDs was annotated manually and then measured automatically by the AI program. RESULTS: A total of 115 eyes of 101 patients with CSC were included, of which 70 were diagnosed with chronic CSC and 45 with acute CSC. Patients with foveal flat PEDs and multiple flat foveal and extrafoveal PEDs at baseline had a higher chance of developing the chronic form. AI-based volumetric analysis revealed no significant differences between the groups. CONCLUSIONS: While more evidence is needed to confirm the effectiveness of AI-based quantitative PED analysis, this study highlights the importance of identifying flat irregular PEDs as early as possible in patients with CSC to optimize patient management and long-term visual outcomes.
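
For illustration only (the study used the commercial Discovery platform, whose API is not shown here): volumetric PED quantification amounts to summing the voxels of a segmentation mask and scaling by the voxel size. The spacing values below are hypothetical.

```python
import numpy as np

def ped_volume_mm3(mask: np.ndarray, spacing_mm=(0.12, 0.004, 0.011)) -> float:
    """mask: boolean array (B-scans, axial, lateral); spacing: hypothetical voxel size in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3

mask = np.zeros((49, 496, 512), dtype=bool)
mask[20:25, 200:260, 100:300] = True          # a flat, wide pigment epithelium detachment
print(f"PED volume: {ped_volume_mm3(mask):.3f} mm^3")
```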

5.
Article in English | MEDLINE | ID: mdl-38189905

ABSTRACT

PURPOSE: Semantic segmentation plays a pivotal role in many applications related to medical image and video analysis. However, designing a neural network architecture for medical image and surgical video segmentation is challenging due to the diverse features of relevant classes, including heterogeneity, deformability, transparency, blunt boundaries, and various distortions. We propose a network architecture, DeepPyramid+, which addresses diverse challenges encountered in medical image and surgical video segmentation. METHODS: The proposed DeepPyramid+ incorporates two major modules, namely "Pyramid View Fusion" (PVF) and "Deformable Pyramid Reception" (DPR), to address the outlined challenges. PVF replicates a deduction process within the neural network, aligning with the human visual system, thereby enhancing the representation of relative information at each pixel position. Complementarily, DPR introduces shape- and scale-adaptive feature extraction techniques using dilated deformable convolutions, enhancing accuracy and robustness in handling heterogeneous classes and deformable shapes. RESULTS: Extensive experiments conducted on diverse datasets, including endometriosis videos, MRI images, OCT scans, and cataract and laparoscopy videos, demonstrate the effectiveness of DeepPyramid+ in handling various challenges such as shape and scale variation, reflection, and blur degradation. DeepPyramid+ demonstrates significant improvements in segmentation performance, achieving up to a 3.65% increase in Dice coefficient for intra-domain segmentation and up to a 17% increase in Dice coefficient for cross-domain segmentation. CONCLUSIONS: DeepPyramid+ consistently outperforms state-of-the-art networks across diverse modalities considering different backbone networks, showcasing its versatility. Accordingly, DeepPyramid+ emerges as a robust and effective solution, successfully overcoming the intricate challenges associated with relevant content segmentation in medical images and surgical videos. Its consistent performance and adaptability indicate its potential to enhance precision in computerized medical image and surgical video analysis applications.
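
The Dice coefficient reported above is the standard overlap metric for segmentation; a generic sketch (not the DeepPyramid+ code) follows.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """pred, target: boolean masks of the same shape."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((128, 128), dtype=bool)
pred[30:80, 30:80] = True
target = np.zeros((128, 128), dtype=bool)
target[40:90, 40:90] = True
print(f"Dice: {dice(pred, target):.3f}")
```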

6.
Retina ; 44(2): 316-323, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37883530

ABSTRACT

PURPOSE: To identify optical coherence tomography (OCT) features that predict the course of central serous chorioretinopathy (CSC) with an artificial intelligence-based program. METHODS: Multicenter, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were enrolled. Baseline OCTs were examined by an artificial intelligence-based platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). Through this platform, retinal layer thicknesses and volumes, including intraretinal and subretinal fluid and pigment epithelium detachment, were measured automatically. Baseline OCT features were compared between acute CSC and chronic CSC patients. RESULTS: One hundred and sixty eyes of 144 patients with CSC were enrolled, of which 100 had chronic CSC and 60 acute CSC. Retinal layer analysis of baseline OCT scans showed that the inner nuclear layer, the outer nuclear layer, and the photoreceptor-retinal pigmented epithelium complex were significantly thicker at baseline in eyes with acute CSC than in those with chronic CSC (P < 0.001). Similarly, the choriocapillaris, choroidal stroma, and retinal thickness (RT) were greater in acute CSC than in chronic CSC eyes (P = 0.001). Volume analysis revealed greater average subretinal fluid volumes in the acute CSC group than in the chronic CSC group (P = 0.041). CONCLUSION: Optical coherence tomography features may be helpful in predicting the clinical course of CSC. The baseline presence of increased thickness of the outer retinal layers, choriocapillaris, and choroidal stroma, together with a greater subretinal fluid volume, appears to be associated with an acute course of the disease.


Subjects
Central Serous Chorioretinopathy, Humans, Central Serous Chorioretinopathy/diagnosis, Optical Coherence Tomography/methods, Retrospective Studies, Artificial Intelligence, Retina, Fluorescein Angiography
7.
Sci Rep ; 13(1): 19667, 2023 11 11.
Article in English | MEDLINE | ID: mdl-37952011

ABSTRACT

Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically assigns biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, requiring only B-scan-level presence annotations. We trained a neural network on 22,723 OCT B-scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings. We incorporated the class annotations and domain knowledge into a loss function to constrain the output to biologically plausible solutions. The method was tested on a set of OCT volumes from 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. Our model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information during training. We achieved a correlation coefficient of 0.946 for the prediction of the IRF area.


Subjects
Diabetic Retinopathy, Macular Degeneration, Macular Edema, Humans, Diabetic Retinopathy/diagnostic imaging, Macular Edema/diagnostic imaging, Optical Coherence Tomography/methods, Macular Degeneration/diagnostic imaging, Biomarkers
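
A small sketch of how detected markers can be assigned to ETDRS rings by their distance from the fovea, using the standard 1/3/6 mm ring diameters. The fovea position and millimetre coordinates are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def etdrs_ring(xy_mm: np.ndarray, fovea_mm: np.ndarray) -> np.ndarray:
    """Return 0 = central subfield, 1 = inner ring, 2 = outer ring, 3 = outside the grid."""
    r = np.linalg.norm(xy_mm - fovea_mm, axis=-1)
    return np.digitize(r, bins=[0.5, 1.5, 3.0])   # radii of the 1/3/6 mm ETDRS circles

markers = np.array([[0.2, 0.1], [1.0, 0.8], [2.5, 0.0], [4.0, 0.0]])  # en-face positions in mm
print(etdrs_ring(markers, fovea_mm=np.array([0.0, 0.0])))             # [0 1 2 3]
```
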
8.
Eur J Radiol ; 167: 111047, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37690351

ABSTRACT

PURPOSE: To evaluate the effectiveness of automated liver segmental volume quantification and calculation of the liver segmental volume ratio (LSVR) on a non-contrast T1-vibe Dixon liver MRI sequence using a deep learning segmentation pipeline. METHOD: A dataset of 200 liver MRIs with a non-contrast 3 mm T1-vibe Dixon sequence was manually labeled slice-by-slice by an expert for the Couinaud liver segments, while the portal and hepatic veins were labeled separately. A convolutional neural network was trained using 170 liver MRIs for training and 30 for evaluation. Liver segmental volumes without liver vessels were retrieved, and the LSVR was calculated as the volume of liver segments I-III divided by the volume of liver segments IV-VIII. The LSVR was compared with the expert's manual LSVR calculation and with the LSVR calculated on CT scans in 30 patients who had CT and MRI within 6 months. RESULTS: The convolutional neural network classified the Couinaud segments I-VIII with an average Dice score of 0.770 ± 0.03, ranging between 0.726 ± 0.13 (segment IVb) and 0.810 ± 0.09 (segment V). The mean LSVR calculated on liver MRIs unseen by the model was 0.32 ± 0.14, compared with a manually quantified LSVR of 0.33 ± 0.15, resulting in a mean absolute error (MAE) of 0.02. The LSVR retrieved from the CT scans was comparable at 0.35 ± 0.14, with an MAE of 0.04. The automated LSVR showed significant correlation with the manual MRI LSVR (Spearman r = 0.97, p < 0.001) and the CT LSVR (Spearman r = 0.95, p < 0.001). CONCLUSIONS: A convolutional neural network allowed accurate automated liver segmental volume quantification and calculation of the LSVR based on a non-contrast T1-vibe Dixon sequence.


Subjects
Deep Learning, Humans, Liver/diagnostic imaging, Radiography, Radionuclide Imaging, Magnetic Resonance Imaging
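
A brief sketch of the LSVR definition used above, computed from a predicted Couinaud label map (labels 1-8 assumed, vessels already excluded); the label map and voxel size below are placeholders.

```python
import numpy as np

def lsvr(label_map: np.ndarray, voxel_mm3: float) -> float:
    """Liver segmental volume ratio: volume of segments I-III over volume of segments IV-VIII."""
    vols = {s: (label_map == s).sum() * voxel_mm3 for s in range(1, 9)}
    left_lateral = vols[1] + vols[2] + vols[3]          # segments I-III
    remaining = sum(vols[s] for s in range(4, 9))       # segments IV-VIII
    return left_lateral / remaining

labels = np.random.randint(0, 9, size=(40, 256, 256))   # placeholder segmentation output
print(f"LSVR: {lsvr(labels, voxel_mm3=1.5 * 1.5 * 3.0):.2f}")
```
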
9.
Sci Rep ; 13(1): 16417, 2023 09 29.
Article in English | MEDLINE | ID: mdl-37775538

ABSTRACT

Polarimetry is an optical characterization technique capable of analyzing the polarization state of light reflected by materials and biological samples. In this study, we investigate the potential of Müller matrix polarimetry (MMP) to analyze fresh pancreatic tissue samples. Because of its highly heterogeneous appearance, differentiating pancreatic tissue types is a complex task, and the pancreas's challenging location in the body makes direct imaging difficult. However, accurate and reliable methods for diagnosing pancreatic diseases are critical for improving patient outcomes. To this end, we measured the Müller matrices of ex-vivo unfixed human pancreatic tissue and leveraged the feature-learning capabilities of a machine-learning model to derive an optimized data representation that minimizes the normal-abnormal classification error. We show experimentally that our approach accurately differentiates between normal and abnormal pancreatic tissue. To our knowledge, this is the first study to use ex-vivo unfixed human pancreatic tissue combined with feature learning from raw Müller matrix readings for this purpose.


Subjects
Diagnostic Imaging, Humans, Diagnostic Imaging/methods, Spectrum Analysis
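
An illustrative sketch, not the study's pipeline: each measurement point yields a 4x4 Müller matrix, which can be flattened into a 16-dimensional feature vector and fed to a simple normal-vs-abnormal classifier. The data below are synthetic, and the study learned an optimized representation rather than classifying raw matrix entries directly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
mueller = rng.normal(size=(400, 4, 4))            # one 4x4 Mueller matrix per measured point
labels = rng.integers(0, 2, size=400)             # 0 = normal, 1 = abnormal (placeholder)

X = mueller.reshape(len(mueller), -1)             # 16 raw matrix entries as features
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```
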
10.
Int J Comput Assist Radiol Surg ; 18(6): 1085-1091, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37133678

ABSTRACT

PURPOSE: A fundamental problem in designing safe machine learning systems is identifying when samples presented to a deployed model differ from those observed at training time. Detecting so-called out-of-distribution (OoD) samples is crucial in safety-critical applications such as robotically guided retinal microsurgery, where distances between the instrument and the retina are derived from sequences of 1D images that are acquired by an instrument-integrated optical coherence tomography (iiOCT) probe. METHODS: This work investigates the feasibility of using an OoD detector to identify when images from the iiOCT probe are inappropriate for subsequent machine learning-based distance estimation. We show how a simple OoD detector based on the Mahalanobis distance can successfully reject corrupted samples coming from real-world ex vivo porcine eyes. RESULTS: Our results demonstrate that the proposed approach can successfully detect OoD samples and help maintain the performance of the downstream task within reasonable levels. MahaAD outperformed a supervised approach trained on the same kind of corruptions and achieved the best performance in detecting OoD cases from a collection of iiOCT samples with real-world corruptions. CONCLUSION: The results indicate that detecting corrupted iiOCT data through OoD detection is feasible and does not need prior knowledge of possible corruptions. Consequently, MahaAD could aid in ensuring patient safety during robotically guided microsurgery by preventing deployed prediction models from estimating distances that put the patient at risk.


Subjects
Microsurgery, Retina, Animals, Swine, Microsurgery/methods, Retina/diagnostic imaging, Retina/surgery, Machine Learning, Optical Coherence Tomography/methods
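
A minimal sketch of Mahalanobis-distance out-of-distribution scoring in the spirit described above: fit a Gaussian to in-distribution feature vectors and reject test samples whose distance exceeds a threshold. The features here are random placeholders rather than iiOCT data, and this is not the MahaAD code itself.

```python
import numpy as np

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 32))                    # in-distribution feature vectors
mu = train_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(32))

def mahalanobis(x: np.ndarray) -> np.ndarray:
    """Mahalanobis distance of each row of x to the fitted Gaussian."""
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

in_dist = rng.normal(size=(5, 32))
corrupted = rng.normal(loc=4.0, size=(5, 32))               # simulated corrupted samples
threshold = np.quantile(mahalanobis(train_feats), 0.99)     # reject the most atypical 1%
print("in-distribution rejected:", mahalanobis(in_dist) > threshold)
print("corrupted rejected:      ", mahalanobis(corrupted) > threshold)
```
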
11.
Int J Comput Assist Radiol Surg ; 18(7): 1185-1192, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37184768

ABSTRACT

PURPOSE: Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems in endoscopic surgeries. For this, tracking the endoscope pose is a key component, but remains challenging due to illumination conditions, deforming tissues and the breathing motion of organs. METHOD: We propose a solution for stereo endoscopes that estimates depth and optical flow to minimize two geometric losses for camera pose estimation. Most importantly, we introduce two learned adaptive per-pixel weight mappings that balance contributions according to the input image content. To do so, we train a Deep Declarative Network to take advantage of the expressiveness of deep learning and the robustness of a novel geometric-based optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in vivo dataset, StereoMIS, which includes a wider spectrum of typically observed surgical settings. RESULTS: Our method outperforms state-of-the-art methods on average and more importantly, in difficult scenarios where tissue deformations and breathing motion are visible. We observed that our proposed weight mappings attenuate the contribution of pixels on ambiguous regions of the images, such as deforming tissues. CONCLUSION: We demonstrate the effectiveness of our solution to robustly estimate the camera pose in challenging endoscopic surgical scenes. Our contributions can be used to improve related tasks like simultaneous localization and mapping (SLAM) or 3D reconstruction, therefore advancing surgical scene understanding in minimally invasive surgery.


Subjects
Algorithms, Three-Dimensional Imaging, Humans, Three-Dimensional Imaging/methods, Endoscopy/methods, Minimally Invasive Surgical Procedures/methods, Endoscopes
12.
Med Image Anal ; 87: 102822, 2023 07.
Article in English | MEDLINE | ID: mdl-37182321

ABSTRACT

Recent advances in machine learning models have greatly increased the performance of automated methods in medical image analysis. However, the internal functioning of such models is largely hidden, which hinders their integration into clinical practice. Explainability and trust are viewed as important prerequisites for the widespread use of modern methods in clinical communities. Validation of machine learning models is therefore an important aspect, and yet most methods are validated only in a limited way. In this work, we focus on providing a richer and more appropriate validation approach for highly powerful Visual Question Answering (VQA) algorithms. To better understand the performance of these methods, which answer arbitrary questions related to images, this work focuses on an automatic visual Turing test (VTT). That is, we propose an automatic adaptive questioning method that aims to expose the reasoning behavior of a VQA algorithm. Specifically, we introduce a reinforcement learning (RL) agent that observes the history of previously asked questions and uses it to select the next question to pose. We demonstrate our approach in the context of evaluating algorithms that automatically answer questions related to diabetic macular edema (DME) grading. The experiments show that such an agent behaves similarly to a clinician, asking questions that are relevant to key clinical concepts.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Humans, Diabetic Retinopathy/diagnostic imaging, Macular Edema/diagnostic imaging, Algorithms, Machine Learning
13.
Eur Phys J Plus ; 138(5): 391, 2023.
Article in English | MEDLINE | ID: mdl-37192839

ABSTRACT

Medical imaging has been employed intensively for screening, diagnosis, and monitoring during the COVID-19 pandemic. With the improvement of RT-PCR and rapid inspection technologies, the diagnostic references have shifted, and current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, the efficient and complementary value of medical imaging was recognized at the beginning of the pandemic, when clinicians faced an unknown infectious disease and lacked sufficient diagnostic tools. Optimizing medical imaging for pandemics may still have encouraging implications for future public health, especially for the theranostics of long-lasting post-COVID-19 syndrome. A critical concern with the use of medical imaging is the increased radiation burden, particularly when imaging is used for screening and rapid containment. Emerging artificial intelligence (AI) technology provides the opportunity to reduce this radiation burden while maintaining diagnostic quality. This review summarizes current AI research on dose reduction for medical imaging; retrospectively identifying its potential during COVID-19 may still carry positive implications for future public health.

14.
EClinicalMedicine ; 55: 101745, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36457646

ABSTRACT

Background: Diagnosing heparin-induced thrombocytopenia (HIT) at the bedside remains challenging, leaving a significant number of patients at risk of delayed diagnosis or overtreatment. We hypothesized that machine-learning algorithms could be used to develop a more accurate and user-friendly diagnostic tool that integrates diverse clinical and laboratory information and accounts for complex interactions. Methods: We conducted a prospective cohort study including 1393 patients with suspected HIT between 2018 and 2021 from 10 study centers. Detailed clinical information and laboratory data were collected, and various immunoassays were conducted. The washed-platelet heparin-induced platelet activation assay (HIPA) served as the reference standard. Findings: HIPA diagnosed HIT in 119 patients (prevalence 8.5%). The feature selection process in the training dataset (75% of patients) yielded the following predictor variables: (1) immunoassay test result, (2) platelet nadir, (3) unfractionated heparin use, (4) CRP, (5) timing of thrombocytopenia, and (6) other causes of thrombocytopenia. The best-performing models were a support vector machine for the chemiluminescent immunoassay (CLIA) and the ELISA, and a gradient boosting machine for the particle-gel immunoassay (PaGIA). In the validation dataset (25% of patients), the AUROC of all models was 0.99 (95% CI: 0.97, 1.00). Compared with the currently recommended diagnostic algorithm (4Ts score, immunoassay), the number of false-negative patients was reduced from 12 to 6 (-50.0%; ELISA), from 9 to 3 (-66.7%; PaGIA), and from 14 to 5 (-64.3%; CLIA). The number of false-positive individuals was reduced from 87 to 61 (-29.8%; ELISA) and from 200 to 63 (-68.5%; PaGIA), and increased from 50 to 63 (+29.0%) for the CLIA. Interpretation: Our user-friendly machine-learning algorithm for the diagnosis of HIT (https://toradi-hit.org) was substantially more accurate than the currently recommended diagnostic algorithm. It has the potential to reduce delayed diagnosis and overtreatment in clinical practice. Future studies should validate this model in wider settings. Funding: Swiss National Science Foundation (SNSF) and International Society on Thrombosis and Haemostasis (ISTH).
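
A hedged sketch of the modelling idea, not the TORADI-HIT model itself: fit a gradient-boosting classifier on the six listed predictors and report the AUROC on a 25% validation split. All feature values below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1393
X = np.column_stack([
    rng.normal(size=n),            # immunoassay test result (e.g. optical density)
    rng.normal(150, 60, size=n),   # platelet nadir
    rng.integers(0, 2, size=n),    # unfractionated heparin use
    rng.gamma(2.0, 30.0, size=n),  # CRP
    rng.integers(0, 3, size=n),    # timing category of thrombocytopenia
    rng.integers(0, 2, size=n),    # other causes of thrombocytopenia present
])
y = rng.binomial(1, 0.085, size=n)  # ~8.5% prevalence as in the cohort

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```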

15.
Sci Rep ; 12(1): 22059, 2022 12 21.
Article in English | MEDLINE | ID: mdl-36543852

ABSTRACT

We evaluated the effectiveness of automated segmentation of the liver and its vessels with a convolutional neural network on non-contrast T1 vibe Dixon acquisitions. A dataset of non-contrast T1 vibe Dixon liver magnetic resonance images was labelled slice-by-slice by an expert for the outer liver border, portal veins, and hepatic veins. A 3D U-Net convolutional neural network was trained with different combinations of the Dixon in-phase, opposed-phase, water, and fat reconstructions. The network trained on the single-modal in-phase reconstructions achieved high performance for liver parenchyma (Dice 0.936 ± 0.02), portal vein (0.634 ± 0.09), and hepatic vein (0.532 ± 0.12) segmentation. No benefit was observed from multi-modal input combining the in-phase, opposed-phase, fat, and water reconstructions (p = 1.0 for all experiments). Accuracy for differentiation between portal and hepatic veins was 99% for portal veins and 97% for hepatic veins in the central region, and slightly lower in the peripheral region (91% for portal veins, 80% for hepatic veins). In conclusion, deep learning-based automated segmentation of the liver and its vessels on non-contrast T1 vibe Dixon acquisitions was highly effective, and the single-modal in-phase input achieved the best performance in segmentation and in the differentiation between portal and hepatic veins.


Subjects
Liver, Neural Networks (Computer), Liver/diagnostic imaging, Magnetic Resonance Imaging/methods, Portal Vein/diagnostic imaging, Water, Computer-Assisted Image Processing/methods
16.
Nat Commun ; 13(1): 5882, 2022 10 06.
Article in English | MEDLINE | ID: mdl-36202816

ABSTRACT

Despite the potential of deep learning (DL)-based methods to substitute CT-based PET attenuation and scatter correction for CT-free PET imaging, a critical bottleneck is their limited capability in handling the large heterogeneity of tracers and scanners in PET imaging. This study employs a simple way to integrate domain knowledge into DL for CT-free PET imaging. In contrast to conventional direct DL methods, we simplify the complex problem through a domain decomposition, so that the learning of anatomy-dependent attenuation correction can be achieved robustly in a low-frequency domain while the original anatomy-independent high-frequency texture is preserved during processing. Even when trained on one tracer from one scanner, the effectiveness and robustness of our proposed approach are confirmed in tests with various external imaging tracers on different scanners. Such robust, generalizable, and transparent DL development may enhance the potential for clinical translation.


Subjects
Deep Learning, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging, Positron Emission Tomography-Computed Tomography, Positron Emission Tomography/methods
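
A conceptual sketch of the domain-decomposition idea under stated assumptions: a Gaussian blur stands in for the low-frequency isolation and a scalar scaling stands in for the learned correction. It is not the published pipeline, only an illustration of correcting the low-frequency content while carrying the high-frequency texture over unchanged.

```python
import torch
import torch.nn.functional as F

def gaussian_lowpass(img: torch.Tensor, sigma: float = 3.0) -> torch.Tensor:
    """img: (B, 1, H, W). Separable Gaussian blur used here as a crude low-pass filter."""
    radius = int(3 * sigma)
    x = torch.arange(-radius, radius + 1, dtype=img.dtype)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    k = (k / k.sum()).view(1, 1, 1, -1)
    img = F.conv2d(img, k, padding=(0, radius))                    # horizontal pass
    return F.conv2d(img, k.transpose(2, 3), padding=(radius, 0))   # vertical pass

pet = torch.rand(1, 1, 128, 128)          # placeholder uncorrected PET slice
low = gaussian_lowpass(pet)               # anatomy-dependent low-frequency content
high = pet - low                          # anatomy-independent high-frequency texture
corrected_low = low * 1.2                 # stand-in for the learned correction
output = corrected_low + high             # texture preserved in the result
print(output.shape)
```
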
17.
Ophthalmologica ; 245(6): 516-527, 2022.
Article in English | MEDLINE | ID: mdl-36215958

ABSTRACT

INTRODUCTION: In this retrospective cohort study, we evaluated the performance of an artificial intelligence (AI) algorithm in detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME), and analyzed the sources of its errors. METHODS: A total of 3,981 OCT volumes from 374 patients with AMD and 11,501 OCT volumes from 811 patients with DME were acquired with the Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading center graders (ground truth). The performance of an already published AI algorithm to detect IRF and SRF separately, and of a combined fluid detector (IRF and/or SRF), was evaluated on the same OCT volumes. The sources of disagreement between annotation and prediction and their relationship to central retinal thickness were analyzed. We computed the mean areas under the receiver operating characteristic curves (AUC) and under the precision-recall curves (AP), accuracy, sensitivity, specificity, and precision. RESULTS: The AUC for IRF was 0.92 and 0.98, and for SRF 0.98 and 0.99, in the AMD and DME cohorts, respectively. The AP for IRF was 0.89 and 1.00, and for SRF 0.97 and 0.93, in the AMD and DME cohorts, respectively. The accuracy, specificity, and sensitivity for IRF were 0.87, 0.88, 0.84 and 0.93, 0.95, 0.93, and for SRF 0.93, 0.93, 0.93 and 0.95, 0.95, 0.95, in the AMD and DME cohorts, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity, and sensitivity were 0.89, 0.93, 0.90 and 0.95, 0.88, 0.93, in the AMD and DME cohorts, respectively. False positives occurred in the presence of retinal shadow artifacts and strong retinal deformation; false negatives were due to small hyporeflective areas combined with poor image quality. The combined detector correctly predicted more OCT volumes than the single IRF and SRF detectors: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort. DISCUSSION/CONCLUSION: The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining the single detectors provides better fluid detection accuracy than considering them separately. The observed independence of the single detectors indicates that they learned features particular to IRF and SRF.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Macular Degeneration, Macular Edema, Wet Macular Degeneration, Humans, Macular Edema/diagnosis, Diabetic Retinopathy/diagnosis, Optical Coherence Tomography/methods, Subretinal Fluid, Retrospective Studies, Artificial Intelligence, Macular Degeneration/diagnosis, Angiogenesis Inhibitors
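
A small sketch of the combined-detector logic: a volume is counted as containing fluid if either the IRF or the SRF detector fires, and standard metrics are computed against the reading-center labels. All predictions and labels below are random placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
irf_pred = rng.random(200) > 0.5            # per-volume IRF detector decision
srf_pred = rng.random(200) > 0.5            # per-volume SRF detector decision
any_fluid_true = rng.random(200) > 0.4      # ground-truth "any fluid" label (placeholder)

any_fluid_pred = irf_pred | srf_pred        # positive if either single detector fires

tn, fp, fn, tp = confusion_matrix(any_fluid_true, any_fluid_pred).ravel()
print("accuracy:   ", (tp + tn) / 200)
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```
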
18.
JACC Cardiovasc Imaging ; 15(7): 1325-1338, 2022 07.
Article in English | MEDLINE | ID: mdl-35592889

ABSTRACT

Myocarditis, the condition of an inflamed myocardium, is a diagnostic challenge because of its heterogeneous presentation. Contemporary noninvasive evaluation of patients with clinically suspected myocarditis using cardiac magnetic resonance (CMR) includes assessment of the dimensions and function of the heart chambers, conventional T2-weighted imaging, late gadolinium enhancement, novel T1 and T2 mapping, and extracellular volume fraction calculation. CMR feature tracking, texture analysis, and artificial intelligence are emerging as modern techniques that may further improve diagnosis and prognostication in this clinical setting. This review describes the evidence surrounding the different CMR methods and image post-processing techniques and highlights their value for clinical decision making, monitoring, and risk stratification across the stages of this condition.


Subjects
Myocarditis, Artificial Intelligence, Contrast Media, Gadolinium, Humans, Magnetic Resonance Imaging/methods, Cine Magnetic Resonance Imaging/methods, Magnetic Resonance Spectroscopy, Myocarditis/pathology, Myocardium/pathology, Predictive Value of Tests
19.
Eur J Nucl Med Mol Imaging ; 49(9): 3061-3072, 2022 07.
Article in English | MEDLINE | ID: mdl-35226120

ABSTRACT

PURPOSE: Alzheimer's disease (AD) studies have revealed that abnormal deposition of tau spreads in a specific spatial pattern, namely the Braak stages. However, Braak staging is based on post mortem brains, each of which represents only a cross section of the tau trajectory in disease progression, and numerous cases have been reported that do not conform to that model. This study therefore aimed to identify the tau trajectory and quantify tau progression in a data-driven approach using the continuous latent space learned by a variational autoencoder (VAE). METHODS: A total of 1080 [18F]Flortaucipir brain positron emission tomography (PET) images were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A VAE was built to compress the hidden features of the tau images into a latent space. Hierarchical agglomerative clustering and a minimum spanning tree (MST) were applied to organize the features and calibrate them to the tau progression, thus deriving a pseudo-time. The image-level tau trajectory was inferred by continuously sampling across the calibrated latent features. We assessed the pseudo-time with regard to tau standardized uptake value ratios (SUVr) in AD-vulnerable regions, amyloid deposition, glucose metabolism, cognitive scores, and clinical diagnosis. RESULTS: We identified four clusters that plausibly capture certain stages of AD and organized them in the latent space. The inferred tau trajectory agreed with Braak staging. According to the derived pseudo-time, tau first deposits in the parahippocampal gyrus and amygdala, and then spreads to the fusiform gyrus, inferior temporal lobe, and posterior cingulate; amyloid accumulates before this regional tau deposition. CONCLUSION: The spatiotemporal trajectory of tau progression inferred in this study is consistent with Braak staging, and the profiles of the other biomarkers over disease progression agree well with previous findings. This approach additionally has the potential to quantify tau progression as a continuous variable by taking a whole-brain tau image into account.


Subjects
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/metabolism, Brain/metabolism, Carbolines, Cognitive Dysfunction/metabolism, Disease Progression, Humans, Positron Emission Tomography/methods, tau Proteins/metabolism
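
A compact sketch of the ordering step under stated assumptions, with synthetic latent codes in place of the VAE encoder outputs: cluster the latent codes, connect the cluster centres with a minimum spanning tree, and read off a pseudo-time as the path distance from a chosen root cluster.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
# Synthetic stand-in for per-scan latent codes from a trained VAE encoder.
latents = np.concatenate([rng.normal(loc=i, scale=0.3, size=(50, 8)) for i in range(4)])

clusters = AgglomerativeClustering(n_clusters=4).fit_predict(latents)
centres = np.stack([latents[clusters == c].mean(axis=0) for c in range(4)])

mst = minimum_spanning_tree(cdist(centres, centres))         # tree over cluster centres
dist_from_root, _ = dijkstra(mst, directed=False, indices=0, return_predecessors=True)
pseudo_time = dist_from_root[clusters]                        # per-scan pseudo-time value
print(np.round(dist_from_root, 2))
```
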
20.
Eur J Nucl Med Mol Imaging ; 49(6): 1843-1856, 2022 05.
Article in English | MEDLINE | ID: mdl-34950968

ABSTRACT

PURPOSE: A critical bottleneck for the credibility of artificial intelligence (AI) is replicating the results in the diversity of clinical practice. We aimed to develop an AI that can be independently applied to recover high-quality imaging from low-dose scans on different scanners and tracers. METHODS: Brain [18F]FDG PET imaging of 237 patients scanned with one scanner was used for the development of AI technology. The developed algorithm was then tested on [18F]FDG PET images of 45 patients scanned with three different scanners, [18F]FET PET images of 18 patients scanned with two different scanners, as well as [18F]Florbetapir images of 10 patients. A conditional generative adversarial network (GAN) was customized for cross-scanner and cross-tracer optimization. Three nuclear medicine physicians independently assessed the utility of the results in a clinical setting. RESULTS: The improvement achieved by AI recovery significantly correlated with the baseline image quality indicated by structural similarity index measurement (SSIM) (r = -0.71, p < 0.05) and normalized dose acquisition (r = -0.60, p < 0.05). Our cross-scanner and cross-tracer AI methodology showed utility based on both physical and clinical image assessment (p < 0.05). CONCLUSION: The deep learning development for extensible application on unknown scanners and tracers may improve the trustworthiness and clinical acceptability of AI-based dose reduction.


Subjects
Deep Learning, Fluorodeoxyglucose F18, Artificial Intelligence, Brain/diagnostic imaging, Humans, Computer-Assisted Image Processing, Positron Emission Tomography/methods
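
A short sketch of the kind of evaluation mentioned above, with synthetic images: compute the SSIM of the low-dose and recovered images against the full-dose reference and correlate the improvement with the baseline quality using Spearman's r. The "recovered" image is a stand-in for the GAN output; scikit-image and SciPy are assumed to be installed.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
baseline_ssim, improvement = [], []
for _ in range(20):
    full_dose = rng.random((64, 64))
    low_dose = full_dose + rng.normal(scale=rng.uniform(0.05, 0.4), size=(64, 64))
    recovered = 0.5 * low_dose + 0.5 * full_dose          # stand-in for the AI-recovered image
    s_low = ssim(full_dose, low_dose, data_range=low_dose.max() - low_dose.min())
    s_rec = ssim(full_dose, recovered, data_range=recovered.max() - recovered.min())
    baseline_ssim.append(s_low)
    improvement.append(s_rec - s_low)

r, p = spearmanr(baseline_ssim, improvement)
print(f"Spearman r = {r:.2f}, p = {p:.3f}")
```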